Pathwise Stochastic Control Problems and Stochastic HJB Equations
Authors
Abstract
In this paper we study a class of pathwise stochastic control problems in which the optimality is allowed to depend on the paths of exogenous noise (or information). Such a phenomenon can be illustrated by considering a particular investor who wants to take advantage of certain extra information, but in a completely legal manner. We show that such a control problem may not even have a “minimizing sequence,” but nevertheless the (Bellman) dynamic programming principle still holds. We then show that the corresponding Hamilton–Jacobi–Bellman equation is a stochastic partial differential equation, as was predicted by Lions and Souganidis [C. R. Acad. Sci. Paris Sér. I Math., 327 (1998), pp. 735–741]. Our main device is a Doss–Sussmann-type transformation introduced in our previous work [Stochastic Process. Appl., 93 (2001), pp. 181–204] and [Stochastic Process. Appl., 93 (2001), pp. 205–228]. With the help of such a transformation we reduce the pathwise control problem to a more standard relaxed control problem, from which we are able to verify that the value function of the pathwise stochastic control problem is the unique stochastic viscosity solution to this stochastic partial differential equation, in the sense of [Stochastic Process. Appl., 93 (2001), pp. 181–204] and [Stochastic Process. Appl., 93 (2001), pp. 205–228].
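For orientation, a stochastic HJB equation of the kind described in the abstract can be sketched schematically as the backward SPDE below. The coefficients b, σ, l, g, the control set U, and the terminal payoff h are generic placeholders for illustration only, not the paper's exact setting; the stochastic integral is written in Stratonovich form, as is common after a Doss–Sussmann-type transformation:

```latex
\begin{aligned}
-\,dV(t,x) &= \sup_{u\in U}\Big[\tfrac{1}{2}\operatorname{tr}\!\big(\sigma\sigma^{\top}(x,u)\,D^2V(t,x)\big)
  + b(x,u)\cdot DV(t,x) + l(x,u)\Big]\,dt \\
&\quad + g\big(x,V(t,x)\big)\circ dB_t, \qquad V(T,x)=h(x),
\end{aligned}
```

Here B is the exogenous Brownian noise on whose paths the value function V is allowed to depend; when g ≡ 0 the equation reduces to the classical deterministic HJB equation.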
Similar references
Stochastic control with rough paths
We study a class of controlled differential equations driven by rough paths (or rough path realizations of Brownian motion) in the sense of T. Lyons. It is shown that the value function satisfies an HJB-type equation; we also establish a form of the Pontryagin maximum principle. Deterministic problems of this type arise in the duality theory for controlled diffusion processes and typically invol...
Continuous dependence on coefficients for stochastic evolution equations with multiplicative Lévy noise and monotone nonlinearity
Semilinear stochastic evolution equations with multiplicative Lévy noise are considered. The drift term is assumed to be monotone nonlinear and with linear growth. Unlike other similar works, we do not impose coercivity conditions on coefficients. We establish the continuous dependence of the mild solution with respect to initial conditions and also on coefficients. As corollaries of ...
Stochastic Optimal Control Problems with a Bounded Memory
This paper treats a finite time horizon optimal control problem in which the controlled state dynamics is governed by a general system of stochastic functional differential equations with a bounded memory. An infinite-dimensional HJB equation is derived using a Bellman-type dynamic programming principle. It is shown that the value function is the unique viscosity solution of the HJB equation. I...
Numerical Methods for Nonlinear PDEs in Finance
Many problems in finance can be posed in terms of an optimal stochastic control. Some well-known examples include transaction cost/uncertain volatility models [17, 2, 25], passport options [1, 26], unequal borrowing/lending costs in option pricing [9], risk control in reinsurance [23], optimal withdrawals in variable annuities[13], optimal execution of trades [20, 19], and asset allocation [28,...
Uniqueness results for convex Hamilton–Jacobi equations under p > 1 growth conditions on data
Unbounded stochastic control problems may lead to Hamilton-Jacobi-Bellman equations whose Hamiltonians are not always defined, especially when the diffusion term is unbounded with respect to the control. We obtain existence and uniqueness of viscosity solutions growing at most like o(1+ |x|p) at infinity for such HJB equations and more generally for degenerate parabolic equations with a superli...
Journal title:
SIAM Journal on Control and Optimization
Volume 45, Issue –
Pages –
Publication year: 2007